456 research outputs found

    A posteriori agreement as a quality measure for readability prediction systems

    All readability research is ultimately concerned with the question of whether a prediction system can automatically determine the readability level of an unseen text. A significant problem for such a system is that readability may depend in part on the reader. If different readers assess the readability of texts in fundamentally different ways, there is insufficient a priori agreement to justify the correctness of a readability prediction system based on the texts assessed by those readers. We built a data set of readability assessments by expert readers. We clustered the experts into groups with greater a priori agreement and then measured, for each group, whether classifiers trained only on data from this group exhibited a classification bias. As this was found to be the case, the classification mechanism cannot be unproblematically generalized to a different user group.
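    The "a priori agreement" the abstract relies on is a standard inter-rater statistic. As a minimal illustration (not the paper's actual method, and with made-up ratings), Cohen's kappa between two expert readers can be computed like this:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    # Chance agreement if each rater labelled independently at their own rates.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical expert readers rating five texts:
a = ["easy", "easy", "hard", "hard", "easy"]
b = ["easy", "hard", "hard", "hard", "easy"]
kappa = cohens_kappa(a, b)
```

    Clustering experts into groups with higher pairwise kappa is one plausible way to operationalize "greater a priori agreement" before training per-group classifiers.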

    Standardised Reputation Measurement

    Well-defined formal definitions for sentiment and opinion are extended to incorporate the necessary elements to provide a formal quantitative definition of reputation. This definition takes the form of a time-based index, in which each element is a function of a collection of opinions mined during a given time period. The resulting formal definition is validated against informal notions of reputation. Practical aspects of data procurement to support such a reputation index are discussed. The assumption that all mined opinions comprise a complete set is questioned. A case is made that unexpressed positive sentiment exists and can be quantified. Comment: 8 pages, submitted to IDEAL 2017, October/November, Guilin, China
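    A loose sketch of the shape such a time-based index might take (the aggregation function and names below are our assumptions, not the paper's formal definition):

```python
from statistics import mean

def reputation_index(opinions_by_period):
    """Toy time-based reputation index: one element per time period,
    each a function (here, simply the mean) of the sentiment scores
    in [-1, 1] mined during that period. Empty periods yield None."""
    return [mean(scores) if scores else None for scores in opinions_by_period]

# Three periods of hypothetical mined sentiment scores:
index = reputation_index([[0.5, 0.7, -0.2], [], [0.9, 0.1]])
```

    The empty middle period illustrates the procurement problem the abstract raises: the index is only as complete as the set of opinions actually mined.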

    Marginal Release Under Local Differential Privacy

    Many analysis and machine learning tasks require the availability of marginal statistics on multidimensional datasets while providing strong privacy guarantees for the data subjects. Applications for these statistics range from finding correlations in the data to fitting sophisticated prediction models. In this paper, we provide a set of algorithms for materializing marginal statistics under the strong model of local differential privacy. We prove the first tight theoretical bounds on the accuracy of marginals compiled under each approach, perform empirical evaluation to confirm these bounds, and evaluate them for tasks such as modeling and correlation testing. Our results show that releasing information based on (local) Fourier transformations of the input is preferable to alternatives based directly on (local) marginals.
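    The paper's Fourier-based mechanisms are more sophisticated, but the local-DP setting they build on can be sketched with classic randomized response for a single binary marginal (an illustration of the privacy model, not the paper's algorithm):

```python
import math
import random

def randomize(bit, epsilon):
    """Each user perturbs their own bit before reporting it (local model)."""
    keep = math.exp(epsilon) / (math.exp(epsilon) + 1)  # P(report truthfully)
    return bit if random.random() < keep else 1 - bit

def estimate_marginal(noisy_fraction, epsilon):
    """Debias the observed fraction of 1s to estimate the true marginal."""
    keep = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return (noisy_fraction - (1 - keep)) / (2 * keep - 1)

random.seed(0)
true_bits = [1] * 700 + [0] * 300                  # true one-way marginal: 0.7
reports = [randomize(b, 1.0) for b in true_bits]   # all the server ever sees
estimate = estimate_marginal(sum(reports) / len(reports), 1.0)
```

    The key property is that the aggregator only ever sees the noisy reports, yet the debiased estimate converges to the true marginal as the number of users grows.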

    Going to great lengths in the pursuit of luxury: how longer brand names can enhance the luxury perception of a brand

    Brand names are a crucial part of the brand equity and marketing strategy of any company. Research suggests that companies spend considerable time and money to create suitable names for their brands and products. This paper uses Zipf's law (or the Principle of Least Effort) to analyze the perceived luxuriousness of brand names. One of the most robust laws in linguistics, Zipf's law describes the inverse relationship between a word's length and its frequency: the more frequently a word is used in language, the shorter it tends to be. Zipf's law has been applied to many fields of science, and in this paper we provide evidence for the idea that because polysyllabic words (and brand names) are rare in everyday conversation, they are perceived as more complex, distant, and abstract, and that the use of longer brand names can enhance the perception of how luxurious a brand is (compared with shorter brand names, which are considered close, frequent, and concrete by consumers). Our results suggest that shorter names (monosyllabic) are better suited to basic brands, whereas longer names (trisyllabic or more) are more appropriate for luxury brands.
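    The length-frequency relationship behind the argument is easy to show numerically. A tiny sketch with an illustrative (made-up) frequency table, not corpus data from the paper:

```python
# Toy word-frequency table: function words vs. "luxury"-flavored words.
freqs = {
    "the": 5000, "of": 4200, "and": 3900, "a": 3700, "to": 3500,
    "luxurious": 12, "magnificent": 9, "extravagance": 7, "opulence": 5,
}

ranked = sorted(freqs, key=freqs.get, reverse=True)  # most frequent first
common, rare = ranked[:5], ranked[-4:]

def avg_length(words):
    return sum(map(len, words)) / len(words)

# Frequent words are short; rare words are long, as Zipf's law predicts,
# which is the asymmetry the paper maps onto basic vs. luxury brand names.
```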

    Self-Destructing Models: Increasing the Costs of Harmful Dual Uses in Foundation Models

    A growing ecosystem of large, open-source foundation models has reduced the labeled data and technical expertise necessary to apply machine learning to many new problems. Yet foundation models pose a clear dual-use risk, indiscriminately reducing the costs of building both harmful and beneficial machine learning systems. To mitigate this risk, we propose the task blocking paradigm, in which foundation models are trained with an additional mechanism to impede adaptation to harmful tasks while retaining good performance on desired tasks. We call the resulting models self-destructing models, inspired by mechanisms that prevent adversaries from using tools for harmful purposes. We present an algorithm for training self-destructing models leveraging techniques from meta-learning and adversarial learning, showing that it can largely prevent a BERT-based model from learning to perform gender identification without harming the model's ability to perform profession classification. We conclude with a discussion of future directions. Comment: Presented at the First Workshop of Pre-training: Perspectives, Pitfalls, and Paths Forward (ICML, 2022) and New Frontiers in Adversarial Machine Learning Workshop (ICML, 2022).

    Which words are hard to recognize? Prosodic, lexical, and disfluency factors that increase speech recognition error rates

    Despite years of speech recognition research, little is known about which words tend to be misrecognized and why. Previous work has shown that errors increase for infrequent words, short words, and very loud or fast speech, but many other presumed causes of error (e.g., nearby disfluencies, turn-initial words, phonetic neighborhood density) have never been carefully tested. The reasons for the huge differences found in error rates between speakers also remain largely mysterious. Using a mixed-effects regression model, we investigate these and other factors by analyzing the errors of two state-of-the-art recognizers on conversational speech. Words with higher error rates include those with extreme prosodic characteristics, those occurring turn-initially or as discourse markers, and doubly confusable pairs: acoustically similar words that also have similar language model probabilities. Words preceding disfluent interruption points (first repetition tokens and words before fragments) also have higher error rates. Finally, even after accounting for other factors, speaker differences cause enormous variance in error rates, suggesting that speaker error rate variance is not fully explained by differences in word choice, fluency, or prosodic characteristics. We also propose that doubly confusable pairs, rather than high neighborhood density, may better explain phonetic neighborhood errors in human speech processing.
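    The "doubly confusable pair" notion can be sketched as a filter over word pairs: close in form and close in language model probability. The sketch below approximates acoustic similarity with string similarity (our simplification; the paper would use a phonetic distance), and the thresholds are illustrative:

```python
import difflib
import math

def doubly_confusable(w1, p1, w2, p2,
                      min_similarity=0.75, max_logprob_gap=1.0):
    """Flag a pair of words as 'doubly confusable': similar in form
    (string similarity here stands in for acoustic similarity) AND
    similar in language model probability (small log-probability gap)."""
    similar_form = difflib.SequenceMatcher(None, w1, w2).ratio() >= min_similarity
    similar_prob = abs(math.log(p1) - math.log(p2)) <= max_logprob_gap
    return similar_form and similar_prob
```

    Under this view, "there"/"their" is doubly confusable (similar sound, comparable probability), while an acoustically similar pair with very different probabilities is not, since the language model can still disambiguate it.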